Multi-armed bandit

In probability theory, the multi-armed bandit problem (sometimes called the ''K''- or ''N''-armed bandit problem) is a problem in which a gambler at a row of slot machines (sometimes known as "one-armed bandits") has to decide which machines to play, how many times to play each machine and in which order to play them. When played, each machine provides a random reward from a distribution specific to that machine. The objective of the gambler is to maximize the sum of rewards earned through a sequence of lever pulls.
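Stated slightly more formally (a standard formalization assumed here, not quoted from the article): with ''K'' arms whose reward distributions have unknown mean payoffs, the gambler pulls one arm per round and observes its random reward, and maximizing the expected total reward over ''T'' pulls is equivalent to minimizing the regret relative to always playing the best arm:

```latex
% Regret after T pulls (standard definition; notation is assumed, not the article's)
\rho(T) \;=\; T\,\mu^{*} \;-\; \sum_{t=1}^{T} \mathbb{E}\!\left[ r_{t} \right],
\qquad \mu^{*} \;=\; \max_{1 \le i \le K} \mu_{i},
```

where r_t is the reward received at round t and \mu_i is the mean reward of arm i.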
Herbert Robbins, realizing the importance of the problem, constructed convergent population selection strategies in his 1952 paper "Some Aspects of the Sequential Design of Experiments".
A theorem, the Gittins index, first published by John C. Gittins, gives an optimal policy in the Markov setting for maximizing the expected discounted reward.
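In a common textbook statement of this result (notation assumed here, not taken from the article), each arm is a Markov chain with state ''x'', per-pull reward ''r''(''x''), and discount factor 0 < β < 1; the index of a state is the largest ratio of expected discounted reward to expected discounted time achievable up to a stopping time τ, and the optimal policy always pulls the arm whose current state has the highest index:

```latex
% Gittins index of state x (a common textbook formulation; notation is assumed)
\nu(x) \;=\; \sup_{\tau \ge 1}
\frac{\mathbb{E}\!\left[ \sum_{t=0}^{\tau-1} \beta^{t}\, r(x_{t}) \;\middle|\; x_{0} = x \right]}
     {\mathbb{E}\!\left[ \sum_{t=0}^{\tau-1} \beta^{t} \;\middle|\; x_{0} = x \right]}
```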
In practice, multi-armed bandits have been used to model the problem of managing research projects in a large organization, like a science foundation or a pharmaceutical company. Given a fixed budget, the problem is to allocate resources among the competing projects, whose properties are only partially known at the time of allocation, but which may become better understood as time passes.
In early versions of the multi-armed bandit problem, the gambler has no initial knowledge about the machines. The crucial tradeoff the gambler faces at each trial is between "exploitation" of the machine that has the highest expected payoff and "exploration" to get more information about the expected payoffs of the other machines. The trade-off between exploration and exploitation is also faced in reinforcement learning.
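As a concrete illustration of this tradeoff, the sketch below (hypothetical code: Bernoulli payouts and an ε-greedy rule are assumptions for illustration, not something taken from the article) mostly exploits the machine with the best estimated payoff and, with small probability ε, explores a machine chosen at random:

```python
import random

def pull(arm_probs, arm):
    """Simulate one lever pull: reward 1 with the arm's hidden payout probability, else 0."""
    return 1.0 if random.random() < arm_probs[arm] else 0.0

def epsilon_greedy(arm_probs, rounds=10_000, epsilon=0.1):
    k = len(arm_probs)
    counts = [0] * k          # how often each machine was played
    estimates = [0.0] * k     # running mean reward per machine (the gambler's knowledge)
    total = 0.0
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.randrange(k)                          # exploration
        else:
            arm = max(range(k), key=lambda i: estimates[i])    # exploitation
        reward = pull(arm_probs, arm)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean update
        total += reward
    return total, estimates

if __name__ == "__main__":
    hidden = [0.2, 0.5, 0.75]  # hypothetical true payout probabilities, unknown to the gambler
    total, estimates = epsilon_greedy(hidden)
    print("total reward:", total)
    print("estimated payouts:", [round(e, 3) for e in estimates])
```

With a small ε the gambler quickly concentrates its pulls on the best machine while still refining its estimates of the others.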
==Empirical motivation==
The multi-armed bandit problem models an agent that simultaneously attempts to acquire new knowledge (called "exploration") and optimize his or her decisions based on existing knowledge (called "exploitation"). The agent attempts to balance these competing tasks in order to maximize his or her total value over the period of time considered. There are many practical applications of the bandit model, for example:
* clinical trials investigating the effects of different experimental treatments while minimizing patient losses,〔Press (1986)〕
* adaptive routing efforts for minimizing delays in a network,
* portfolio design,
* borrowing money from family or friends.
In these practical examples, the problem requires balancing reward maximization based on the knowledge already acquired with attempting new actions to further increase knowledge. This is known as the ''exploitation vs. exploration tradeoff'' in reinforcement learning.
The model has also been used to control dynamic allocation of resources to different projects, answering the question of which project to work on, given uncertainty about the difficulty and payoff of each possibility.〔Farias and Madan (2011)〕
Originally considered by Allied scientists in World War II, it proved so intractable that, according to Peter Whittle, the problem was proposed to be dropped over Germany so that German scientists "could also waste their time on it".
The version of the problem now commonly analyzed was formulated by Herbert Robbins in 1952.

Source: Wikipedia, the free encyclopedia. Read the full "Multi-armed bandit" article on Wikipedia.


